From: Tim Deegan
Date: Mon, 20 Jun 2011 12:16:14 +0000 (+0100)
Subject: x86/mm/shadow: adjust early-unshadow heuristic for PAE guests.

x86/mm/shadow: adjust early-unshadow heuristic for PAE guests.

PAE guests have 8-byte PTEs but tend to clear memory with 4-byte writes.
This means that when zeroing a former pagetable, every second 4-byte write
is unaligned, and so the consecutive-zeroes --> unshadow heuristic never
kicks in.  Adjust the heuristic not to reset when a write is >= 4 bytes
and writes zero but is not PTE-aligned.

Signed-off-by: Tim Deegan
---

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 40b5b6e961..aec511b410 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4918,11 +4918,14 @@ static void emulate_unmap_dest(struct vcpu *v,
     ASSERT(mfn_valid(sh_ctxt->mfn1));
 
     /* If we are writing lots of PTE-aligned zeros, might want to unshadow */
-    if ( likely(bytes >= 4)
-         && (*(u32 *)addr == 0)
-         && ((unsigned long) addr & ((sizeof (guest_intpte_t)) - 1)) == 0 )
-        check_for_early_unshadow(v, sh_ctxt->mfn1);
-    else
+    if ( likely(bytes >= 4) && (*(u32 *)addr == 0) )
+    {
+        if ( ((unsigned long) addr & ((sizeof (guest_intpte_t)) - 1)) == 0 )
+            check_for_early_unshadow(v, sh_ctxt->mfn1);
+        /* Don't reset the heuristic if we're writing zeros at non-aligned
+         * addresses, otherwise it doesn't catch REP MOVSD on PAE guests */
+    }
+    else
         reset_early_unshadow(v);
 
     /* We can avoid re-verifying the page contents after the write if: